Contestability in government-related AI systems

Bridge Professor Susan Landau outlines recommendations for the use of AI in government systems from a recent workshop on automated systems, contestability, and the law.

As artificial intelligence (AI) and machine learning (ML) become more common, several US government departments are considering how to apply these technologies to their work. Processes like reviewing and approving applications for Medicaid or veterans' benefits could be made more efficient with advanced automated decision-making systems. However, giving AI control over any part of a government process carries risk.

To explore these risks and determine best practices for the use of AI in government, Professor Susan Landau of the Department of Computer Science co-organized a workshop on Advanced Automated Systems, Contestability, and the Law. Together with co-organizers Steven M. Bellovin of Columbia University, James X. Dempsey of UC Berkeley, and Ece Kamar, Vice President of Research and Managing Director of the AI Frontiers Lab, Landau assembled a group of experts spanning government, law, the tech industry, civil society, and academia. Several recommendations from their final report, which have also appeared in other reports, have since been incorporated into a White House memo on government use of artificial intelligence and machine learning systems.

The report centers on contestability: a person's ability to meaningfully understand a decision and push back against it if they wish. For example, if a person applying for a government loan or claiming an IRS tax deduction is rejected, they should receive a clear reason for the rejection and be able to appeal it. In a recent Lawfare article covering the report, "Challenging the Machine: Insights from a Workshop on Contestability of Advanced Automated Systems," Landau and co-author Dempsey give the hypothetical example of someone denied a loan because their AI-determined credit score falls below the approval threshold. Even if a human reviews the application and makes the final decision, the outcome is still significantly shaped by AI.

Landau advocates that these systems be developed with human factors in mind to ensure that they make fair decisions. "Unexplainable algorithms and opaque appeal processes can overwhelm people already on the margin in dealing with life's necessities," she writes. Contestability is also often a legal requirement, making it all the more important to build into AI/ML systems.

Although it was developed for government, the report has wide-reaching implications and can be applied broadly to areas such as hiring systems or the allocation of police resources, among other examples. Beyond these practical applications, its recommendations can also guide policymakers and educators as the AI landscape evolves.
